ASCRIBING MENTAL QUALITIES TO MACHINES

Abstract: Ascribing mental qualities like *beliefs*, *intentions* and *wants* to a machine is sometimes correct if done conservatively and is sometimes necessary to express what is known about its state. We propose some new definitional tools for this: definitions relative to an approximate theory and second order structural definitions. This paper is to appear in *Philosophical Perspectives in Artificial Intelligence*, edited by Martin Ringle and published by Humanities Press.

(this draft of MENTAL[F76,JMC]@SU-AI compiled at 17:58 on October 29, 1978)

INTRODUCTION

To ascribe certain *beliefs*, *knowledge*, *free will*, *intentions*, *consciousness*, *abilities* or *wants* to a machine or computer program is *legitimate* when such an ascription expresses the same information about the machine that it expresses about a person. It is *useful* when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never *logically required* even for humans, but expressing reasonably briefly what is actually known about the state of a machine in a particular situation may require ascribing mental qualities or qualities isomorphic to them[1]. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans and later applied to humans. Ascription of mental qualities is *most straightforward* for machines of known structure such as thermostats and computer operating systems, but is *most useful* when applied to entities whose structure is very incompletely known.

These views are motivated by work in artificial intelligence[2] (abbreviated AI). They can be taken as asserting that many of the philosophical problems of mind take a concrete form when one takes seriously the idea of making machines behave intelligently. In particular, AI raises for machines two issues that have heretofore been considered only in connection with people.

First, in designing intelligent programs and looking at them from the outside we need to determine the conditions under which specific mental and volitional terms are applicable. We can exemplify these problems by asking when it might be legitimate to say about a machine, "It knows I want a reservation to Boston, and it can give it to me, but it won't."

Second, when we want a *generally intelligent*[3] computer program, we must build into it a *general view* of what the world is like with especial attention to facts about how the information required to solve problems is to be obtained and used. Thus we must provide it with some kind of *metaphysics* (general world-view) and *epistemology* (theory of knowledge) however naive.

As much as possible, we will ascribe mental qualities separately from each other instead of bundling them in a concept of mind. This is necessary, because present machines have rather varied little minds; the mental qualities that can legitimately be ascribed to them are few and differ from machine to machine. We will not even try to meet objections like, "Unless it also does X, it is illegitimate to speak of its having mental qualities."

Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance. However, the machines mankind has so far found it useful to construct rarely have beliefs about beliefs, although such beliefs will be needed by computer programs that reason about what knowledge they lack and where to get it. Mental qualities peculiar to human-like motivational structures[4], such as love and hate, will not be required for intelligent behavior, but we could probably program computers to exhibit them if we wanted to, because our common sense notions about them translate readily into certain program and data structures. Still other mental qualities, e.g. humor and appreciation of beauty, seem much harder to model. While we will be quite liberal in ascribing *some* mental qualities even to rather primitive machines, we will try to be conservative in our criteria for ascribing any *particular* quality.

The successive sections of this paper will give philosophical and AI reasons for ascribing beliefs to machines, two new forms of definition that seem necessary for defining mental qualities and examples of their use, examples of systems to which mental qualities are ascribed, some first attempts at defining a variety of mental qualities, some comments on other views on mental qualities, notes, and references.

This paper is exploratory and its presentation is non-technical. Any axioms that are presented are illustrative and not part of an axiomatic system proposed as a serious candidate for AI or philosophical use. This is regrettable for two reasons. First, AI use of these concepts requires formal axiomatization. Second, the lack of formalism focusses attention on whether the paper correctly characterizes mental qualities rather than on the formal properties of the theories proposed. I think we can attain a situation like that in the foundations of mathematics, wherein the controversies about whether to take an intuitionist or classical point of view have been mainly replaced by technical studies of intuitionist and classical theories and the relations between them. In future work, I hope to treat these matters more formally along the lines of (McCarthy 1977a and 1977b). This won't eliminate controversy about the true nature of mental qualities, but I believe that their eventual resolution requires more technical knowledge than is now available.

WHY ASCRIBE MENTAL QUALITIES?

*Why should we want to ascribe beliefs to machines at all?* This is the converse question to that of *reductionism*. Instead of asking how mental qualities can be *reduced* to physical ones, we ask how to *ascribe* mental qualities to physical systems.

Our general motivation for ascribing mental qualities is the same as for ascribing any other qualities - namely to express available information about the machine and its current state. To have information, we must have a space of possibilities whether explicitly described or not. The ascription must therefore serve to distinguish the present state of the machine from past or future states or from the state the machine would have in other conditions or from the state of other machines. Therefore, the issue is whether ascription of mental qualities is helpful in making these discriminations in the case of machines.

To put the issue sharply, consider a computer program for which we possess complete listings. The behavior of the program in any environment is determined from the structure of the program and can be found out by simulating the action of the program and the environment without having to deal with any concept of belief. Nevertheless, there are several reasons for ascribing belief and other mental qualities:

1. Although we may know the program, its state at a given moment is usually not directly observable, and the facts we can obtain about its current state may be more readily expressed by ascribing certain beliefs and goals than in any other way.

2. Even if we can simulate its interaction with its environment using another more comprehensive program, the simulation may be a billion times too slow. We also may not have the initial conditions of the environment or the environment's laws of motion in a suitable form, whereas it may be feasible to make a prediction of the effects of the beliefs we ascribe to the program without any computer at all.

3. Ascribing beliefs may allow deriving general statements about the program's behavior that could not be obtained from any finite number of simulations.

4. The belief and goal structures we ascribe to the program may be easier to understand than the details of the program as expressed in its listing.

5. The belief and goal structure is likely to be close to the structure the designer of the program had in mind, and it may be easier to debug the program in terms of this structure than directly from the listing. In fact, it is often possible for someone to correct a fault by reasoning in general terms about the information in a program or machine, diagnosing what is wrong as a false belief, and looking at the details of the program or machine only sufficiently to determine how the false belief is represented and what mechanism caused it to arise.

6. The difference between this program and another actual or hypothetical program may best be expressed as a difference in belief structure.

All the above reasons for ascribing beliefs are epistemological; i.e. ascribing beliefs is needed to adapt to limitations on our ability to acquire knowledge, use it for prediction, and establish generalizations in terms of the elementary structure of the program. Perhaps this is the general reason for ascribing higher levels of organization to systems.

Computers give rise to numerous examples of building a higher structure on the basis of a lower and conducting subsequent analyses using the higher structure. The geometry of the electric fields in a transistor and its chemical composition give rise to its properties as an electric circuit element. Transistors are combined in small circuits and powered in standard ways to make logical elements such as ANDs, ORs, NOTs and flip-flops. Computers are designed with these logical elements to obey a desired order code; the designer usually needn't consider the properties of the transistors as circuit elements. When writing a compiler from a higher level language, one works with the order code and doesn't have to know about the ANDs and ORs; the user of the higher order language needn't know the computer's order code.

In the above cases, users of the higher level can completely ignore the lower level, because the behavior of the higher level system is completely determined by the values of the higher level variables; e.g. in order to determine the outcome of a computer program, one needn't consider the flip-flops. However, when we ascribe mental structure to humans or goals to society, we always get highly incomplete systems; the higher level behavior cannot be fully predicted from higher level observations and higher level "laws" even when the underlying lower level behavior is determinate. Moreover, at a given state of science and technology, different kinds of information can be obtained from experiment and theory building at the different levels of organization.

In order to program a computer to obtain information and co-operation from people and other machines, we will have to make it ascribe knowledge, belief, and wants to other machines and people. For example, a program that plans trips will have to ascribe knowledge to travel agents and to the airline reservation computers. It must somehow treat the information in books, perhaps by ascribing to them a passive form of knowledge. The more powerful the program in interpreting what it is told, the less it has to know about how the information it can receive is represented internally in the teller and the more its ascriptions of knowledge will look like human ascriptions of knowledge to other humans.

TWO METHODS OF DEFINITION AND THEIR APPLICATION TO MENTAL QUALITIES

In our opinion, a major source of problems in defining mental and intensional concepts is the weakness of the methods of definition that have been *explicitly* used. We introduce two kinds of definition: *definition relative to an approximate theory* and *second order structural definition* and apply them to defining mental qualities.

1. Definitions relative to an approximate theory.

It is commonplace that most scientific concepts are not defined by isolated sentences of natural languages but rather as parts of theories, and the acceptance of the theory is determined by its fit to a large collection of phenomena. We propose a similar method for explicating mental and other common sense concepts, but a certain phenomenon plays a more important role than with scientific theories: the concept is meaningful only in the theory, and cannot be defined with more precision than the theory permits.

The notion of one theory approximating another needs to be formalized. In the case of physics, one can think of various kinds of numerical or probabilistic approximation. I think this kind of approximation is untypical and misleading and won't help explicate such concepts as *intentional action* as meaningful in approximate theories. Instead it may go something like this:

Consider a detailed theory T that has a state variable s. We may imagine that s changes with time. The approximating theory T' has a state variable s'. There is a predicate atp(s,T') whose truth means that T' is applicable when the world is in state s. There is a relation corr(s,s') which asserts that s' corresponds to the state s. We have

    1)  ∀s.(atp(s,T') ⊃ ∃s'.corr(s,s')).

Certain functions f1(s), f2(s), etc. have corresponding functions f1'(s'), f2'(s'), etc. We have relations like

    2)  ∀s s'.(corr(s,s') ⊃ f1(s) = f1'(s')).

However, the approximate theory T' may have additional functions g1'(s'), etc. that do not correspond to any functions of s. Even when it is possible to construct functions g corresponding to the g' functions, their definitions will often seem arbitrary, because the common sense user of g1' will only have used it within the context of T'.

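To make the formalism concrete, here is a minimal sketch in Python (everything in it - the two-label approximate theory, the boundary band where T' makes no claim, the particular functions - is invented for illustration, not taken from the paper) of theories T and T', the relations atp and corr, and a finite check of properties 1) and 2):

    # Hypothetical illustration of one theory approximating another.
    # T: detailed states are exact temperatures.  T': approximate states
    # are just the labels "cold" and "hot".

    detailed_states = range(-20, 41)        # exact temperatures, degrees C
    approx_states = ("cold", "hot")

    def atp(s, theory="T'"):
        # T' is applicable except near the boundary, where the
        # approximate theory makes no claim at all.
        return abs(s - 20) > 2

    def corr(s, sp):
        # s' corresponds to s when the label agrees with the temperature.
        return sp == ("hot" if s > 20 else "cold")

    def f1(s):                              # a function of detailed states ...
        return s > 20

    def f1p(sp):                            # ... and its T' counterpart
        return sp == "hot"

    # Property 1): wherever T' applies, some s' corresponds to s.
    assert all(any(corr(s, sp) for sp in approx_states)
               for s in detailed_states if atp(s))

    # Property 2): corresponding states agree on corresponding functions.
    assert all(f1(s) == f1p(sp)
               for s in detailed_states if atp(s)
               for sp in approx_states if corr(s, sp))

A g'-style function in this toy would be something like a "comfort" label defined only on the approximate states; any attempt to define a corresponding g on exact temperatures forces arbitrary choices inside the boundary band, which is the point made above.
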
Concepts whose definition involves counterfactuals provide examples.

Suppose we want to ascribe *intentions* and *free will* and to distinguish a *deliberate action* from an occurrence. We want to call an output a *deliberate action* if the output would have been different if the machine's intentions had been different. This requires a criterion for the truth of the counterfactual conditional sentence "If its intentions had been different the output wouldn't have occurred", and we require what seems to be a novel treatment of counterfactuals.

We treat the "relevant aspect of reality" as a Cartesian product so that we can talk about changing one component and leaving the others unchanged. This would be straightforward if the Cartesian product structure existed in the world; however, it usually exists only in certain approximate models of the world. Consequently no single definite state of the world as a whole corresponds to changing one component. The following paragraphs present these ideas in greater detail.

Suppose A is a theory in which some aspect of reality is characterized by the values of three quantities x, y and z. Let f be a function of three arguments, let u be a quantity satisfying u = f(x,y,z), where f(1,1,1) = 3 and f(2,1,1) = 5. Consider a state of the model in which x = 1, y = 1 and z = 1. Within the theory A, the counterfactual conditional sentence "u = 3, but if x were 2, then u would be 5" is true, because the counterfactual condition means changing x to 2 and leaving the other variables unchanged.

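This within-theory evaluation is mechanical enough to program. In the sketch below (illustrative only; the particular f is one invented function satisfying f(1,1,1) = 3 and f(2,1,1) = 5), a state of theory A is literally a Cartesian product of components, so a counterfactual is evaluated by replacing one component and recomputing:

    # In theory A a state is a tuple of independent components, so
    # "if x were 2" means: replace x, leave y and z unchanged, recompute.

    def f(x, y, z):
        return 2 * x + y + z - 1   # invented: f(1,1,1) = 3, f(2,1,1) = 5

    state = {"x": 1, "y": 1, "z": 1}

    def counterfactually(state, component, value):
        hypothetical = dict(state)       # copy: the real state is untouched
        hypothetical[component] = value
        return f(**hypothetical)

    assert f(**state) == 3                        # u = 3 ...
    assert counterfactually(state, "x", 2) == 5   # ... but if x were 2, u would be 5

The next paragraph's point is exactly that outside A there is no canonical way to build the hypothetical state: in the world, changing x might drag y along with it.
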
Now let's go beyond the model and suppose that x, y and z are quantities depending on the state of the world. Even if u = f(x,y,z) is taken as a law of nature, the counterfactual need not be taken as true, because someone might argue that if x were 2, then y would be 3 so that u might not be 5. If the theory A has a sufficiently preferred status we may take the meaning of the counterfactual in A to be its general meaning, but it may sometimes be better to consider the counterfactual as defined solely in the theory, i.e. as *syncategorematic*.

A common sense example may be helpful: Suppose a ski instructor says, "He wouldn't have fallen if he had bent his knees when he made that turn", and another instructor replies, "No, the reason he fell was that he didn't put his weight on his downhill ski". Suppose further that on reviewing a film, they agree that the first instructor was correct and the second mistaken. I contend that this agreement is based on their common acceptance of a theory of skiing, and that *within the theory*, the decision may well be rigorous even though no-one bothers to imagine an alternate world as much like the real world as possible but in which the student had put his weight on his downhill ski.

We suggest that this is often (I haven't yet looked for counter-examples) the common sense meaning of a counterfactual. The counterfactual has a definite meaning in a theory, because the theory has a Cartesian product structure, and the theory is sufficiently preferred that the meaning of the counterfactual in the world is taken as its meaning in the theory. This is especially likely to be true for concepts that have a natural definition in terms of counterfactuals, e.g. the concept of *deliberate action* with which we started this section.

In all cases that we know about, the theory is approximate and incomplete. Provided certain propositions are true, a certain quantity is approximately a given function of certain other quantities. The incompleteness lies in the fact that the theory doesn't predict states of the world but only certain functions of them. Thus a useful concept like deliberate action may seem to vanish if examined too closely, e.g. when we try to define it in terms of states of the world and not just in terms of certain functions of these states.

Remarks:

1.1. The known cases in which a concept is defined relative to an approximate theory involve counterfactuals. This may not always be the case.

1.2. It is important to study the nature of the approximations.

1.3. (McCarthy and Hayes 1969) treats the notion of *X can do Y* using a theory in which the world is regarded as a collection of interacting automata. That paper failed to note that sentences using *can* cannot necessarily be translated into single assertions about the world.

1.4. The attempt by old fashioned introspective psychology to analyze the mind into an interacting *will*, *intellect* and other components cannot be excluded on the methodological grounds used by behaviorists and positivists to declare them meaningless and exclude them from science. These concepts might have precise definitions within a suitable approximate theory.[5]

1.5. The above treatment of counterfactuals in which they are defined in terms of the Cartesian product structure of an approximate theory may be better than the *closest possible world* treatments discussed in (Lewis 1973). The truth-values are well defined within the approximate theories, and the theories can be justified by evidence involving phenomena not mentioned in isolated counterfactual assertions.

1.6. Definition relative to approximate theories may help separate questions, such as some of those concerning counterfactuals, into *internal* questions within the approximate theory and the *external* question of the justification of the theory as a whole. The internal questions are likely to be technical and have definite answers on which people can agree even if they have philosophical or scientific disagreements about the external questions.

2. Second Order Structural Definition.

Structural definitions of qualities are given in terms of the state of the system being described while behavioral definitions are given in terms of its actual or potential behavior[6].

If the structure of the machine is known, one can give an ad hoc *first order structural definition*. This is a predicate B(s,p) where s represents a state of the machine and p represents a sentence in a suitable language, and B(s,p) is the assertion that when the machine is in state s, it *believes* the sentence p. (The considerations of this paper are neutral in deciding whether to regard the object of belief as a sentence or to use a modal operator or to admit *propositions* as abstract objects that can be believed. The paper is written as though sentences are the objects of belief, but I have more recently come to favor propositions and discuss them in (McCarthy 1977a).)

A general *first order* structural definition of belief would be a predicate B(W,M,s,p) where W is the "world" in which the machine M whose beliefs are in question is situated. I do not see how to give such a definition of belief, and I think it is impossible. Therefore we turn to second order definitions[7].

A second order structural definition of belief is a second order predicate β(W,M,B). β(W,M,B) asserts that the first order predicate B is a "good" notion of belief for the machine M in the world W. Here "good" means that the beliefs that B ascribes to M agree with our ideas of what beliefs M would have, not that the beliefs themselves are true. The axiomatizations of belief in the literature are partial second order definitions.

In general, *a second order definition gives criteria for criticizing an ascription of a quality to a system.* We suggest that both our common sense and scientific usage of not-directly-observable qualities corresponds more closely to second order structural definition than to any kind of behavioral definition. Note that a second order definition cannot guarantee that there exist predicates B meeting the criterion β or that such a B is unique. Some qualities are best defined jointly with related qualities, e.g. beliefs and goals may require joint treatment.

Second order definitions criticize whole belief structures rather than individual beliefs. We can treat individual beliefs by saying that a system believes p in state s provided all "reasonably good" B's satisfy B(s,p). Thus we are distinguishing the "intersection" of the reasonably good B's.

(An analogy with cryptography may be helpful. We solve a cryptogram by making hypotheses about the structure of the cipher and about the translation of parts of the cipher text. Our solution is complete when we have "guessed" a cipher system that produces the cryptogram from a plausible plaintext message. Though we never prove that our solution is unique, two different solutions are almost never found except for very short cryptograms. In the analogy, the second order definition β corresponds to the general idea of encipherment, and B is the particular system used. While we will rarely be able to prove uniqueness, we don't expect to find two B's both satisfying β.)

It seems to me that there should be a metatheorem of mathematical logic asserting that not all second order definitions can be reduced to first order definitions and further theorems characterizing those second order definitions that admit such reductions. Such technical results, if they can be found, may be helpful in philosophy and in the construction of formal scientific theories. I would conjecture that many of the informal philosophical arguments that certain mental concepts cannot be reduced to physics will turn out to be sketches of arguments that these concepts require second (or higher) order definitions.

Here is an approximate second order definition of belief. For each state s of the machine and each sentence p in a suitable language L, we assign truth to B(s,p) if and only if the machine is considered to believe p when it is in state s. The language L is chosen for our convenience, and there is no assumption that the machine explicitly represents sentences of L in any way. Thus we can talk about the beliefs of Chinese, dogs, corporations, thermostats, and computer operating systems without assuming that they use English or our favorite first order language. L may or may not be the language we are using for making other assertions, e.g. we could, writing in English, systematically use French sentences as objects of belief. However, the best choice for artificial intelligence work may be to make L a subset of our "outer" language restricted so as to avoid the paradoxical self-references of (Montague 1963).

We now subject B(s,p) to certain criteria; i.e. β(B,W) is considered true provided the following conditions are satisfied:

2.1. The set Bel(s) of beliefs, i.e. the set of p's for which B(s,p) is assigned true when M is in state s, contains sufficiently "obvious" consequences of some of its members.

2.2. Bel(s) changes in a reasonable way when the state changes in time. We like new beliefs to be logical or "plausible" consequences of old ones or to come in as *communications* in some language on the input lines or to be *observations*, i.e. beliefs about the environment the information for which comes in on the input lines. The set of beliefs should not change too rapidly as the state changes with time.

2.3. We prefer the set of beliefs to be as consistent as possible. (Admittedly, consistency is not a quantitative concept in mathematical logic - a system is either consistent or not, but it would seem that we will sometimes have to ascribe inconsistent sets of beliefs to machines and people. Our intuition says that we should be able to maintain areas of consistency in our beliefs and that it may be especially important to avoid inconsistencies in the machine's purely analytic beliefs.)

2.4. Our criteria for belief systems can be strengthened if we identify some of the machine's beliefs as expressing goals, i.e. if we have beliefs of the form "It would be good if ...". Then we can ask that the machine's behavior be somewhat *rational*, i.e. *it does what it believes will achieve its goals*. The more of its behavior we can account for in this way, the better we will like the function B(s,p). We also would like to regard internal state changes as changes in belief in so far as this is reasonable.

2.5. If the machine communicates, i.e. emits sentences in some language that can be interpreted as assertions, questions and commands, we will want the assertions to be among its beliefs unless we are ascribing to it a goal or subgoal that involves lying. We will be most satisfied with our belief ascription, if we can account for its communications as furthering the goals we are ascribing.

2.6. Sometimes we shall want to ascribe introspective beliefs, e.g. a belief that it does not know how to fly to Boston or even that it doesn't know what it wants in a certain situation.

2.7. Finally, we will prefer a more economical ascription B to a less economical one. The fewer beliefs we ascribe and the less they change with state consistent with accounting for the behavior and the internal state changes, the better we will like it. In particular, if ∀s p.(B1(s,p) ⊃ B2(s,p)), but not conversely, and B1 accounts for all the state changes and outputs that B2 does, we will prefer B1 to B2. This insures that we will prefer to assign no beliefs to stones that don't change and don't behave. A belief predicate that applies to a family of machines is preferable to one that applies to a single machine. A toy version of this last preference appears in the sketch below.

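One clause of the criteria is easy to state exactly. The following sketch (a toy invention, not from the paper) checks the preference of 2.7: B1 is preferred to B2 when B1's beliefs are a proper subset of B2's and B1 accounts for the same behavior:

    # Toy check of criterion 2.7 over finite sets of states and sentences.

    states = ("s0", "s1")
    sentences = ("too cold", "too hot", "Napoleon lost Waterloo")

    def B1(s, p):        # the economical candidate
        return (s, p) in {("s0", "too cold"), ("s1", "too hot")}

    def B2(s, p):        # ascribes one extra belief that explains nothing
        return B1(s, p) or p == "Napoleon lost Waterloo"

    def subset(Ba, Bb):
        # ∀s p.(Ba(s,p) ⊃ Bb(s,p)) over the finite toy domain
        return all(Bb(s, p) for s in states for p in sentences if Ba(s, p))

    def prefer(Ba, Bb, same_behavior_accounted_for=True):
        return (subset(Ba, Bb) and not subset(Bb, Ba)
                and same_behavior_accounted_for)

    assert prefer(B1, B2)    # fewer beliefs, same explanatory power

The judgment that B1 accounts for the same behavior is left as an input here, since that is exactly the informal part of the criterion.
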
The above criteria have been formulated somewhat vaguely. This would be bad if there were widely different ascriptions of beliefs to a particular machine that all met our criteria or if the criteria allowed ascriptions that differed widely from our intuitions. My present opinion is that more thought will make the criteria somewhat more precise at no cost in applicability, but that they *should* still remain rather vague, i.e. we shall want to ascribe belief in a *family* of cases. However, even at the present level of vagueness, there probably won't be radically different equally "good" ascriptions of belief for systems of practical interest. If there were, we would notice unresolvable ambiguities in our ascriptions of belief to our acquaintances.

While we may not want to pin down our general idea of belief to a single axiomatization, we will need to build precise axiomatizations of belief and other mental qualities into particular intelligent computer programs.

EXAMPLES OF SYSTEMS WITH MENTAL QUALITIES

Let us consider some examples of machines and programs to which we may ascribe belief and goal structures.

1. *Thermostats.* Ascribing beliefs to simple thermostats is unnecessary for the study of thermostats, because their operation can be well understood without it. However, their very simplicity makes it clearer what is involved in the ascription, and we maintain (partly as a provocation to those who regard attribution of beliefs to machines as mere intellectual sloppiness) that the ascription is legitimate.[8]

First consider a simple thermostat that turns off the heat when the temperature is a degree above the temperature set on the thermostat, turns on the heat when the temperature is a degree below the desired temperature, and leaves the heat as is when the temperature is in the two degree range around the desired temperature. The simplest belief predicate B(s,p) ascribes belief to only three sentences: "The room is too cold", "The room is too hot", and "The room is OK" - the beliefs being assigned to states of the thermostat in the obvious way. We ascribe to it the goal, "The room should be OK". When the thermostat believes the room is too cold or too hot, it sends a message saying so to the furnace. A slightly more complex belief predicate could also be used in which the thermostat has a belief about what the temperature should be and another belief about what it is. It is not clear which is better, but if we wished to consider possible errors in the thermometer, then we would ascribe beliefs about what the temperature is. We do not ascribe to it any other beliefs; it has no opinion even about whether the heat is on or off or about the weather or about who won the battle of Waterloo. Moreover, it has no introspective beliefs; i.e. it doesn't believe that it believes the room is too hot.

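The three-sentence predicate is small enough to write out in full. Here is a sketch (the one-degree deadband follows the description above; the numeric set point and the Python representation are inventions of the example):

    # The simplest belief predicate for the simple thermostat.

    SET_POINT = 20.0     # the temperature set on the thermostat (invented)

    def B(s, p):
        # In state s (the sensed temperature), the thermostat believes p.
        if p == "The room is too cold":
            return s < SET_POINT - 1
        if p == "The room is too hot":
            return s > SET_POINT + 1
        if p == "The room is OK":
            return SET_POINT - 1 <= s <= SET_POINT + 1
        return False     # no opinion on the heat, the weather, or Waterloo

    def message_to_furnace(s):
        # Behavior in the service of the goal "The room should be OK".
        if B(s, "The room is too cold"):
            return "turn the heat on"
        if B(s, "The room is too hot"):
            return "turn the heat off"
        return None      # leave the heat as is
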
Let us compare the above B(s,p) with the criteria of the previous section. The belief structure is consistent (because all the beliefs are independent of one another), the beliefs arise from observation, and they result in action in accordance with the ascribed goal. There is no reasoning and only commands (which we have not included in our discussion) are communicated. Clearly assigning beliefs is of modest intellectual benefit in this case. However, if we consider the class of possible thermostats, then the ascribed belief structure has greater constancy than the mechanisms for actually measuring and representing the temperature.

The temperature control system in my house may be described as follows: Thermostats upstairs and downstairs tell the central system to turn on or shut off hot water flow to these areas. A central water-temperature thermostat tells the furnace to turn on or off thus keeping the central hot water reservoir at the right temperature. Recently it was too hot upstairs, and the question arose as to whether the upstairs thermostat mistakenly *believed* it was too cold upstairs or whether the furnace thermostat mistakenly *believed* the water was too cold. It turned out that neither mistake was made; the downstairs controller *tried* to turn off the flow of water but *couldn't*, because the valve was stuck. The plumber came once and found the trouble, and came again when a replacement valve was ordered. Since the services of plumbers are increasingly expensive, and microcomputers are increasingly cheap, one is led to design a temperature control system that would *know* a lot more about the thermal state of the house and its own state of health.

In the first place, while the present system *couldn't* turn off the flow of hot water upstairs, there is no reason to ascribe to it the *knowledge* that it couldn't, and *a fortiori* it had no ability to *communicate* this *fact* or to take it into account in controlling the system. A more advanced system would know whether the *actions* it *attempted* succeeded, and it would communicate failures and adapt to them. (We adapted to the failure by turning off the whole system until the whole house cooled off and then letting the two parts warm up together. The present system has the *physical capability* of doing this even if it hasn't the *knowledge* or the *will*.)

While the thermostat believes "The room is too cold", there is no need to say that it understands the concept of "too cold". The internal structure of "The room is too cold" is a part of our language, not its.

Consider a thermostat whose wires to the furnace have been cut. Shall we still say that it knows whether the room is too cold? Since fixing the thermostat might well be aided by ascribing this knowledge, we would like to do so. Our excuse is that we are entitled to distinguish - in our language - the concept of a broken temperature control system from the concept of a certain collection of parts, i.e. to make intensional characterizations of physical objects.

2. *Self-reproducing intelligent configurations in a cellular automaton world.* A *cellular automaton system* assigns a finite automaton to each point of the plane with integer co-ordinates. The state of each automaton at time t+1 depends on its state at time t and the states of its neighbors at time t. An early use of cellular automata was by von Neumann (196?), who found a 27 state automaton whose cells could be initialized into a self-reproducing configuration that was also a universal computer. The basic automaton in von Neumann's system had a "resting" state 0, and a point in state 0 whose four neighbors were also in that state would remain in state 0. The initial configurations considered had all but a finite number of cells in state 0, and, of course, this property would persist although the number of non-zero cells might grow indefinitely with time.

The self-reproducing system used the states of a long strip of non-zero cells as a "tape" containing instructions to a "universal constructor" configuration that would construct a copy of the configuration to be reproduced but with each cell in a passive state that would persist as long as its neighbors were also in passive states. After the construction phase, the tape would be copied to make the tape for the new machine, and then the new system would be set in motion by activating one of its cells. The new system would then move away from its mother, and the process would start over. The purpose of the design was to demonstrate that arbitrarily complex configurations could be self-reproducing - the complexity being assured by also requiring that they be universal computers.

Since von Neumann's time, simpler basic cells admitting self-reproducing universal computers have been discovered. The simplest so far is the two state Life automaton of John Conway (Gosper 1976). The state of a cell at time t+1 is determined by its state at time t and the states of its eight neighbors at time t. Namely, a point whose state is 0 will change to state 1 if exactly three of its neighbors are in state 1. A point whose state is 1 will remain in state 1 if two or three of its neighbors are in state 1. In all other cases the state becomes or remains 0.

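The rule just stated transcribes directly into a program (the sparse set-of-live-cells representation and the glider example are conveniences of this sketch, not part of the text):

    # One step of Conway's Life, exactly as stated: a cell in state 0 with
    # exactly three neighbors in state 1 becomes 1; a cell in state 1 with
    # two or three such neighbors stays 1; otherwise the state is 0.

    from collections import Counter

    def life_step(live):
        # live: the (finite) set of co-ordinates of cells in state 1
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider - one of the configurations whose distant future is easy
    # to predict: after 4 steps it has moved one cell diagonally.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    g = glider
    for _ in range(4):
        g = life_step(g)
    assert g == {(x + 1, y + 1) for (x, y) in glider}
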
Although this was not Conway's reason for introducing them, Conway and Gosper have shown that self-reproducing universal computers could be built up as Life configurations.

Consider a number of such self-reproducing universal computers operating in the Life plane, and suppose that they have been programmed to study the properties of their world and to communicate among themselves about it and pursue various goals co-operatively and competitively. Call these configurations Life robots. In some respects their intellectual and scientific problems will be like ours, but in one major respect they live in a simpler world than ours seems to be. Namely, the fundamental physics of their world is that of the Life automaton, and there is no obstacle to each robot *knowing* this physics, and being able to simulate the evolution of a Life configuration given the initial state. Moreover, if the initial state of the robot world is finite it can have been recorded in each robot in the beginning or else recorded on a strip of cells that the robots can read. (The infinite regress of having to describe the description is avoided by providing that the description is not separately described, but can be read *both* as a description of the world *and* as a description of itself.)

Since these robots know the initial state of their world and its laws of motion, they can simulate as much of its history as they want, assuming that each can grow into unoccupied space so as to have memory to store the states of the world being simulated. This simulation is necessarily slower than real time, so they can never catch up with the present - let alone predict the future. This is obvious if the simulation is carried out straightforwardly by updating a list of currently active cells in the simulated world according to the Life rule, but it also applies to any clever mathematical method that might predict millions of steps ahead so long as it is supposed to be applicable to all Life configurations. (Some Life configurations, e.g. static ones or ones containing single *gliders* or *cannon*, can have their distant futures predicted with little computing.) Namely, if there were an algorithm for such prediction, a robot could be made that would predict its own future and then disobey the prediction. The detailed proof would be analogous to the proof of unsolvability of the halting problem for Turing machines.

Now we come to the point of this long disquisition. Suppose we wish to program a robot to be successful in the Life world in competition or co-operation with the others. Without any idea of how to give a mathematical proof, I will claim that our robot will need programs that ascribe purposes and beliefs to its fellow robots and predict how they will react to its own actions by assuming that *they will act in ways that they believe will achieve their goals*. Our robot might acquire these mental theories in several ways: First, we might design the universal machine so that they are present in the initial configuration of the world. Second, we might program it to acquire these ideas by induction from its experience and even transmit them to others through an "educational system". Third, it might derive the psychological laws from the fundamental physics of the world and its knowledge of the initial configuration. Finally, it might discover how robots are built from Life cells by doing experimental "biology".

Knowing the Life physics without some information about the initial configuration is insufficient to derive the *psychological* laws, because robots can be constructed in the Life world in an infinity of ways. This follows from the "folk theorem" that the Life automaton is universal in the sense that any cellular automaton can be constructed by taking sufficiently large squares of Life cells as the basic cell of the other automaton.[9]

Men are in a more difficult intellectual position than Life robots. We don't know the fundamental physics of our world, and we can't even be sure that its fundamental physics is describable in finite terms. Even if we knew the physical laws, they seem to preclude precise knowledge of an initial state and precise calculation of its future both for quantum mechanical reasons and because the continuous functions needed to represent fields seem to involve an infinite amount of information.

This example suggests that much of human mental structure is not an accident of evolution or even of the physics of our world, but is required for successful problem solving behavior and must be designed into or evolved by any system that exhibits such behavior.

3. *Computer time-sharing systems.* These complicated computer programs allocate computer time and other resources among users. They allow each user of the computer to behave as though he had a computer of his own, but also allow them to share files of data and programs and to communicate with each other. They are often used for many years with continual small changes, and the people making the changes and correcting errors are often different from the original authors of the system. A person confronted with the task of correcting a malfunction or making a change in a time-sharing system often can conveniently use a mentalistic model of the system.

Thus suppose a user complains that the system will not run his program. Perhaps the system believes that he doesn't want to run, perhaps it persistently believes that he has just run, perhaps it believes that his quota of computer resources is exhausted, or perhaps it believes that his program requires a resource that is unavailable. Testing these hypotheses can often be done with surprisingly little understanding of the internal workings of the program.

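A sketch of what this mentalistic debugging might look like as a checklist (the four hypotheses are the ones just listed; the probe functions are invented placeholders for whatever the operator actually inspects - status displays, accounting files, resource tables):

    # Invented probes: each returns True if the system can be observed
    # to hold the corresponding belief about the user.
    def believes_user_does_not_want_to_run(user): return False
    def believes_user_has_just_run(user):         return False
    def believes_quota_exhausted(user):           return True
    def believes_resource_unavailable(user):      return False

    def diagnose(user):
        # Test the hypotheses in mentalistic terms, cheapest first.
        hypotheses = [
            ("it believes he doesn't want to run",
             believes_user_does_not_want_to_run),
            ("it persistently believes he has just run",
             believes_user_has_just_run),
            ("it believes his quota is exhausted",
             believes_quota_exhausted),
            ("it believes his program needs an unavailable resource",
             believes_resource_unavailable),
        ]
        for description, held in hypotheses:
            if held(user):
                return "probable malfunction: " + description
        return "no false belief found; look at the internals"

    print(diagnose("complaining_user"))
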
4. *Programs designed to reason.* Suppose we explicitly design a program to represent information by sentences in a certain language stored in the memory of the computer and to decide what to do by making inferences, doing what it concludes will advance its goals. Naturally, we would hope that our previous second order definition of belief will "approve of" a B(s,p) that ascribed to the program believing the sentences explicitly built in. We would be somewhat embarrassed if someone were to show that our second order definition approved as well or better of an entirely different set of beliefs.

Such a program was first proposed in (McCarthy 1959), and here is how it might work:

Information about the world is stored in a wide variety of data structures. For example, a visual scene received by a TV camera may be represented by a 512x512x3 array of numbers representing the intensities of three colors at the points of the visual field. At another level, the same scene may be represented by a list of regions, and at a further level there may be a list of physical objects and their parts together with other information about these objects obtained from non-visual sources. Moreover, information about how to solve various kinds of problems may be represented by programs in some programming language.

However, all the above representations are subordinate to a collection of sentences in a suitable first order language that includes set theory. By subordinate, we mean that there are sentences that tell what the data structures represent and what the programs do. New sentences can arise by a variety of processes: inference from sentences already present, by computation from the data structures representing observations, and by interpreting certain inputs as communications in one or more languages.

The construction of such a program is one of the major approaches to achieving high level artificial intelligence, and, like every other approach, it faces numerous obstacles. These obstacles can be divided into two classes - *epistemological* and *heuristic*. The epistemological problem is to determine what information about the world is to be represented in the sentences and other data structures, and the heuristic problem is to decide how the information can be used effectively to solve problems. Naturally, the problems interact, but the epistemological problem is more basic and also more relevant to our present concerns. We could regard it as solved if we knew how to express the information needed for intelligent behavior so that the solution to problems logically followed from the data. The heuristic problem of actually obtaining the solutions would remain.

The information to be represented can be roughly divided into general information about the world and information about particular situations. The formalism used to represent information about the world must be *epistemologically adequate*, i.e. it must be capable of representing the information that is actually available to the program from its sensory apparatus or can be deduced. Thus it couldn't handle available information about a cup of hot coffee if its only way of representing information about fluids was in terms of the positions and velocities of the molecules. Even the hydrodynamicist's Eulerian distributions of density, velocity, temperature and pressure would be useless for representing the information actually obtainable from a television camera. These considerations are further discussed in (McCarthy and Hayes 1969).

Here are some of the kinds of general information that will have to be represented:

1. Narrative. Events occur in space and time. Some events are extended in time. Partial information must be expressed about what events begin or end during, before and after others. Partial information about places and their spatial relations must be expressible. Sometimes dynamic information, such as velocities, is better known than the space-time facts in terms of which it is defined.

2. Partial information about causal systems. Quantities have values and later have different values. Causal laws relate these values.

3. Some changes are results of actions by the program and other actors. Information about the effects of actions can be used to determine what goals can be achieved in given circumstances.

4. Objects and substances have locations in space. It may be that temporal and causal facts are prior to spatial facts in the formalism.

5. Some objects are actors with beliefs, purposes and intentions.

Of course, the above English description is no substitute for an axiomatized formalism - not even for philosophy but *a fortiori* when computer programs must be written. The main difficulties in designing such a formalism involve deciding how to express partial information. (McCarthy and Hayes 1969) uses a notion of *situation* wherein the situation is never known - only facts about situations are known. Unfortunately, the formalism is not suitable for expressing what might be known when events are taking place in parallel with unknown temporal relations. It also only treats the case in which the result of an action is a definite new situation and therefore isn't suitable for describing continuous processes.

"GLOSSARY" OF MENTAL QUALITIES

In this section we give short "definitions" for machines of a collection of mental qualities. We include a number of terms which give us difficulty with an indication of what the difficulties seem to be. We emphasize the place of these concepts in the design of intelligent robots.

1. *Introspection and self-knowledge.* We say that a machine introspects when it comes to have beliefs about its own mental state. A simple form of introspection takes place when a program determines whether it has certain information and if not asks for it. Often an operating system will compute a check sum of itself every few minutes to verify that it hasn't been changed by a software or hardware malfunction.

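A minimal sketch of the check-sum form of introspection (SHA-256 via Python's hashlib stands in here for whatever simpler sum an operating system of the period computed; run as a script so the program has a file of its own to read):

    import hashlib, sys

    def checksum_of_self():
        # Read our own program text - the memory an operating system
        # would sum is, in this sketch, just the script file.
        with open(sys.argv[0], "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    EXPECTED = checksum_of_self()    # recorded once, when the system starts

    def still_intact():
        # Re-run every few minutes: a mismatch is evidence of a software
        # or hardware malfunction.
        return checksum_of_self() == EXPECTED
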
In principle, introspection is easier for computer programs than for people, because the entire memory in which programs and data are stored is available for inspection. In fact, a computer program can be made to predict how it would react to particular inputs provided it has enough free storage to perform the calculation. This situation smells of paradox, and there is one. Namely, if a program could predict its own actions in less time than it takes to carry out the action, it could refuse to do what it has predicted for itself. This only shows that self-simulation is necessarily a slow process, and this is not surprising.

However, present programs do little interesting introspection. This is just a matter of the undeveloped state of artificial intelligence; programmers don't yet know how to make a computer program look at itself in a useful way.

2. *Consciousness and self-consciousness.* Suppose we wish to distinguish the self-awareness of a machine, animal or person from its awareness of other things. We explicate awareness as belief in certain sentences, so in this case we want to distinguish those sentences or those terms in the sentences that may be considered to be about the self. We also don't expect that self-consciousness will be a single property that something either has or hasn't but rather that there will be many kinds of self-awareness, with humans possessing many of the kinds we can imagine.

Here are some of the kinds of self-awareness:

2.1. Certain predicates of the situation (propositional fluents in the terminology of (McCarthy and Hayes 1969)) are directly observable in almost all situations while others often must be inferred. The almost always observable fluents may reasonably be identified with the senses. Likewise the values of certain fluents are almost always under the control of the being and can be called motor parameters for lack of a common language term. We have in mind the positions of the joints. Most motor parameters are both observable and controllable. I am inclined to regard the possession of a substantial set of such constantly observable or controllable fluents as the most primitive form of self-consciousness, but I have no strong arguments against someone who wished to require more.

577⎇⎇␈F1␈=160⎇2.2.⎇The⎇second⎇level⎇of⎇self-consciousness⎇requires⎇a⎇term⎇␈F2I␈F1⎇in⎇the⎇language⎇denoting
578⎇⎇␈F1the⎇self.⎇ ␈F2I␈F1⎇should⎇belong⎇to⎇the⎇class⎇of⎇persistent⎇objects⎇and⎇some⎇of⎇the⎇same⎇predicates⎇should
579⎇⎇␈F1be⎇applicable⎇to⎇it⎇as⎇are⎇applicable⎇to⎇other⎇objects.⎇ For⎇example,⎇like⎇other⎇objects⎇␈F2I␈F1⎇has⎇a
580⎇⎇␈F1location⎇that⎇can⎇change⎇in⎇time.⎇ ␈F2I␈F1⎇is⎇also⎇visible⎇and⎇impenetrable⎇like⎇other⎇objects.⎇ However,
581⎇⎇␈F1we⎇don't⎇want⎇to⎇get⎇carried⎇away⎇in⎇regarding⎇a⎇physical⎇body⎇as⎇a⎇necessary⎇condition⎇for⎇self-
582⎇⎇␈F1consciousness.⎇ Imagine⎇a⎇distributed⎇computer⎇whose⎇sense⎇and⎇motor⎇organs⎇could⎇also⎇be⎇in⎇a
583⎇⎇␈F1variety⎇of⎇places.⎇ We⎇don't⎇want⎇to⎇exclude⎇it⎇from⎇self-consciousness⎇by⎇definition.
584⎇⎇␈F1␈=160⎇2.3.⎇The⎇third⎇level⎇come⎇when⎇␈F2I␈F1⎇is⎇regarded⎇as⎇an⎇actor⎇among⎇others.⎇ The
585⎇⎇␈F1␈→1280⎇0⎇1⎇0⎇⎇16420⎇␈←
586⎇⎇␈F1conditions⎇that⎇permit⎇␈F2I␈F1⎇to⎇do⎇something⎇are⎇similar⎇to⎇the⎇conditions⎇that⎇permit⎇other⎇actors⎇to
587⎇⎇␈F1do⎇similar⎇things.
2.4. The fourth level requires the applicability of predicates such as believes, wants and can to I. Beliefs about past situations and the ability to hypothesize future situations are also required for this level.
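To make level 2.1 concrete, here is a minimal sketch (a hypothetical rendering of mine; the class and fluent names are not the paper's): fluents carry flags for observability and controllability, and the most primitive self-consciousness is just possession of a substantial set of fluents with one flag or the other.

    from dataclasses import dataclass

    @dataclass
    class Fluent:
        name: str
        observable: bool    # directly observable in almost all situations
        controllable: bool  # almost always under the being's control

    fluents = [
        Fluent("light-level", observable=True, controllable=False),           # a sense
        Fluent("joint-angle", observable=True, controllable=True),            # a motor parameter
        Fluent("outside-temperature", observable=False, controllable=False),  # must be inferred
    ]

    def primitive_self_consciousness(fluents, threshold=2):
        # The most primitive form: a substantial set of constantly
        # observable or controllable fluents.
        return sum(f.observable or f.controllable for f in fluents) >= threshold

    print(primitive_self_consciousness(fluents))   # -> True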
3. Language and thought. Here is a hypothesis arising from artificial intelligence concerning the relation between language and thought. Imagine a person or machine that represents information internally in a huge network. Each node of the network has references to other nodes through relations. (If the system has a variable collection of relations, then the relations have to be represented by nodes, and we get a symmetrical theory if we suppose that each node is connected to a set of pairs of other nodes.) We can imagine this structure to have a long term part and also extremely temporary parts representing current thoughts. Naturally, each being has its own network depending on its own experience. A thought is then a temporary node currently being referenced by the mechanism of consciousness. Its meaning is determined by its references to other nodes, which in turn refer to yet other nodes. Now consider the problem of communicating a thought to another being.
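Before considering communication, it may help to fix the picture with a sketch (the structures and names are mine, offered only as one possible reading): relations are themselves nodes, each node's references are pairs of (relation, target), and a thought is a temporary node.

    class Node:
        def __init__(self, label, temporary=False):
            self.label = label
            self.temporary = temporary   # current thoughts vs. the long term part
            self.references = []         # pairs (relation node, target node)

        def connect(self, relation, target):
            # Relations are represented by nodes, giving the symmetrical
            # theory in which each node connects to pairs of other nodes.
            self.references.append((relation, target))

    is_a = Node("is-a")
    dog = Node("dog")
    animal = Node("animal")
    dog.connect(is_a, animal)

    # A current thought: a temporary node whose meaning lies in its
    # references to other nodes, which refer to yet other nodes.
    thought = Node("that dog over there", temporary=True)
    thought.connect(is_a, dog)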
Its full communication would involve transmitting the entire network that can be reached from the given node, and this would ordinarily constitute the entire experience of the being. More than that, it would also be necessary to communicate the programs that take action on the basis of encountering certain nodes. Even if all this could be transmitted, the recipient would still have to find equivalents for the information in terms of its own network. Therefore, thoughts have to be translated into a public language before they can be communicated.
A language is also a network of associations and programs. However, certain of the nodes in this network (more accurately a family of networks, since no two people speak precisely the same language) are associated with words or set phrases. Sometimes the translation from thoughts to sentences is easy, because large parts of the private networks are taken from the public network, and there is an advantage in preserving the correspondence. However, the translation is always approximate (in a sense that still lacks a technical definition), and some areas of experience are difficult to translate at all. Sometimes this is for intrinsic reasons, and sometimes because particular cultures don't use language in this area. (It is my impression that cultures differ in the extent to which information about facial appearance that can be used for recognition is verbally transmitted.) According to this scheme, the "deep structure" of a publicly expressible thought is a node in the public network. It is translated into the deep structure of a sentence as a tree whose terminal nodes are the nodes to which words or set phrases are attached. This "deep structure" must then be translated into a string in a spoken or written language.
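The last step, from deep structure to string, is mechanical once the tree exists. A toy sketch (mine; the example sentence is arbitrary) in which the leaves are the words or set phrases attached to public-network nodes:

    def linearize(tree):
        # Flatten a deep-structure tree whose terminal nodes carry words.
        if isinstance(tree, str):      # a word or set phrase
            return tree
        return " ".join(linearize(subtree) for subtree in tree)

    deep_structure = (("the", "dog"), ("chased", ("the", "cat")))
    print(linearize(deep_structure))   # -> "the dog chased the cat"

The hard, always approximate step is the earlier one: finding a public-network node and tree for a private thought at all.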
The need to use language to express thought also applies when we have to ascribe thoughts to other beings, since we cannot put the entire network into a single sentence.
4. Intentions. We are tempted to say that a machine intends to perform an action when it believes it will and also believes that it could do otherwise. However, we will resist this temptation and propose that a predicate intends(actor,action,state) be suitably axiomatized, where one of the axioms says that the machine intends the action if it believes it will perform the action and could do otherwise. Armstrong (1968) wants to require an element of servo-mechanism in order that a belief that an action will be performed be regarded as an intention, i.e. there should be a commitment to do it one way or another. There may be good reasons to allow several versions of intention to co-exist in the same formalism.
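The proposed axiom might be written, as a first approximation (the predicate names will and could_refrain are placeholders of mine, not part of the paper's formalism):

    \forall a \,\forall act \,\forall s :\;
      believes(a, will(a, act), s) \,\wedge\, believes(a, could\_refrain(a, act), s)
      \;\supset\; intends(a, act, s)

Writing this as one axiom among several, rather than as a definition, leaves room for Armstrong's servo-mechanism requirement or other versions of intention to be axiomatized alongside it.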
5. Free will. When we program a computer to make choices intelligently after determining its options, examining their consequences, and deciding which is most favorable or most moral or whatever, we must program it to take an attitude towards its freedom of choice essentially isomorphic to that which a human must take to his own. A program will have to take such an attitude towards another unless it knows the details of the other's construction and present state.
We can define whether a particular action was free or forced relative to a theory that ascribes beliefs and within which beings do what they believe will advance their goals. In such a theory, action is precipitated by a belief of the form "I should do X now". We will say that the action was free if changing the belief to "I shouldn't do X now" would have resulted in the action not being performed. This requires that the theory of belief have sufficient Cartesian product structure so that changing a single belief is defined, but it doesn't require defining what the state of the world would be if a single belief were different.
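A minimal sketch of this counterfactual test (everything here is my illustration; the decision procedure is a stand-in): beliefs form a Cartesian product of independent coordinates, so flipping one coordinate is well defined, and no model of the rest of the world is needed.

    def decides_to_act(beliefs):
        # Stand-in for the theory's decision procedure: the being does
        # what it believes will advance its goals.
        return beliefs["I should do X now"]

    def was_free(beliefs, decide=decides_to_act):
        # Free: flipping the single belief would have resulted in the
        # action not being performed.
        flipped = dict(beliefs)        # each key is one coordinate
        flipped["I should do X now"] = not beliefs["I should do X now"]
        return decide(beliefs) and not decide(flipped)

    print(was_free({"I should do X now": True}))   # -> True: the action was free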
It may be possible to separate the notion of a free action into a technical part and a controversial part. The technical part would define freedom relative to an approximate co-ordinate system giving the necessary Cartesian product structure. Relative to the co-ordinatization, the freedom of a particular action would be a technical issue, but people could argue about whether to accept the whole co-ordinate system.
This isn't the whole free will story, because moralists are also concerned with whether praise or blame may be attributed to a choice. The following considerations would seem to apply to any attempt to define the morality of actions in a way that would apply to machines:
5.1. There is unlikely to be a simple behavioral definition. Instead there would be a second order definition criticizing predicates that ascribe morality to actions.
5.2. The theory must contain at least one axiom of morality that is not just a statement of physical fact. Relative to this axiom, moral judgments of actions can be factual.
5.3. The theory of morality will presuppose a theory of belief in which statements of the form "It believed the action would harm someone" are defined. The theory must ascribe beliefs about others' welfare and perhaps about the being's own welfare.
5.4. It might be necessary to consider the machine as embedded in some kind of society in order to ascribe morality to its actions.
5.5. No present machines admit such a belief structure, and no such structure may be required to make a machine with arbitrarily high intelligence in the sense of problem-solving ability.
5.6. It seems unlikely that morally judgable machines or machines to which rights might legitimately be ascribed should be made if and when it becomes possible to do so.
6. Understanding. It seems to me that understanding the concept of understanding is fundamental and difficult. The first difficulty lies in determining what the operand is. What is the "theory of relativity" in "Pat understands the theory of relativity"? What does "misunderstand" mean? It seems that understanding should involve knowing a certain collection of facts, including the general laws that permit deducing the answers to questions. We probably want to separate understanding from issues of cleverness and creativity.
7. Creativity. This may be easier than "understanding", at least if we confine our attention to reasoning processes. Many problem solutions involve the introduction of entities not present in the statement of the problem. For example, proving that an 8 by 8 square board with two diagonally opposite squares removed cannot be covered by dominoes, each covering two adjacent squares, involves introducing the colors of the squares and the fact that a domino covers two squares of opposite color. We want to regard this as a creative proof even though it might be quite easy for an experienced combinatorist.
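The counting argument can be carried out explicitly (a worked version of the standard proof; only the rendering is mine):

    # Color the 8 by 8 board, remove two diagonally opposite corners,
    # and compare the counts of the two colors.
    squares = [(r, c) for r in range(8) for c in range(8)]
    squares.remove((0, 0))
    squares.remove((7, 7))     # diagonally opposite corners share a color

    black = sum((r + c) % 2 == 0 for r, c in squares)
    white = sum((r + c) % 2 == 1 for r, c in squares)
    print(black, white)        # -> 30 32

    # Every domino covers one black and one white square, so a covering
    # requires equal counts; 30 != 32, hence no covering exists.

The creative step is not this arithmetic but the introduction of the colors, which appear nowhere in the statement of the problem.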
OTHER VIEWS ABOUT MIND
The fundamental difference in point of view between this paper and most philosophy is that we are motivated by the problem of designing an artificial intelligence. Therefore, our attitude towards a concept like belief is determined by trying to decide what ways of acquiring and using beliefs will lead to intelligent behavior. Then we discover that much that one intelligence can find out about another can be expressed by ascribing beliefs to it.
A negative view of empiricism seems dictated by the apparent artificiality of designing an empiricist computer program to operate in the real world. Namely, we plan to provide our program with certain senses, but we have no way of being sure that the world in which we are putting the machine is constructible from the sense impressions it will have. Whether it will ever know some fact about the world is contingent, so we are not inclined to build into it the notion that what it can't know about doesn't exist.
The philosophical views most sympathetic to our approach are some of those expressed by Carnap in the discursive sections of (Carnap 1956).
Hilary Putnam (1961) argues that the classical mind-body problems are just as acute for machines as for men. Some of his arguments are more explicit than any given here, but in that paper he doesn't try to solve the problems for machines.
D.M. Armstrong (1968) "attempts to show that there are no valid philosophical or logical reasons for rejecting the identification of mind and brain." He does this by proposing definitions of mental concepts in terms of the state of the brain. Fundamentally, I agree with him and think that such a program of definition can be carried out, but it seems to me that his methods for defining mental qualities as brain states are too weak even for defining properties of computer programs. While he goes beyond behavioral definitions as such, he relies on dispositional states.
This paper is partly an attempt to do what Ryle (1949) says can't be done and shouldn't be attempted, namely to define mental qualities in terms of states of a machine. The attempt is based on methods of which he would not approve: he implicitly requires first order definitions, and he implicitly requires that definitions be made in terms of the state of the world and not in terms of approximate theories.
His final view of the proper subject matter of epistemology is too narrow to help researchers in artificial intelligence. Namely, we need help in expressing those facts about the world that can be obtained in an ordinary situation by an ordinary person, and the general facts about the world that will enable our program to decide to call a travel agent to find out how to get to Boston.
Donald Davidson (1973) undertakes to show that "There is no important sense in which psychology can be reduced to the physical sciences". He proceeds by arguing that the mental qualities of a hypothetical artificial man could not be defined physically even if we knew the details of its physical structure.
One sense of Davidson's statement does not require the arguments he gives. There are many universal computing elements (relays, neurons, gates and flip-flops), and physics tells us many ways of constructing them. Any information processing system that can be constructed of one kind of element can be constructed of any other. Therefore, physics tells us nothing about what information processes exist in nature or can be constructed. Computer science is no more reducible to physics than is psychology.
However, Davidson also argues that the mental states of an organism are not describable in terms of its physical structure, and I take this to assert also that they are not describable in terms of its construction from logical elements. I would take his arguments as showing that mental qualities don't have what I have called first order structural definitions. I don't think they apply to second order definitions.
D.C. Dennett (1971) expresses views very similar to mine about the reasons for ascribing mental qualities to machines. However, the present paper emphasizes criteria for ascribing particular mental qualities to particular machines rather than the general proposition that mental qualities may be ascribed. I think that the chess programs Dennett discusses have more limited mental structures than he seems to ascribe to them. Their beliefs almost always concern particular positions; they believe almost no general propositions about chess, and this accounts for many of their weaknesses. Intuitively, this is well understood by researchers in computer game playing, and providing the program with a way of representing general facts about chess, and even general facts about particular positions, is a major unsolved problem. For example, no present program can represent the assertion "Black has a backward pawn on his Q3 and white may be able to cramp black's position by putting pressure on it". Such a representation would require rules that permit such a statement to be derived in appropriate positions and would guide the examination of possible moves in accordance with it.
I would also distinguish between believing the laws of logic and merely using them (see Dennett, p. 95). The former requires a language that can express sentences about sentences and which contains some kind of reflection principle. Many present problem-solving programs can use modus ponens but cannot reason about their own ability to use new facts in a way that corresponds to believing modus ponens.
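The distinction can be illustrated (an illustration of mine, not a claim about any particular program): a forward chainer applies modus ponens while containing no sentence about modus ponens.

    facts = {"p"}
    rules = {("p", "q")}       # "p implies q", represented only implicitly

    # Forward chaining uses modus ponens at every step.
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True

    print(facts)   # -> {'p', 'q'}

Believing modus ponens would instead require a sentence about sentences, such as "for all sentences P and Q, Q may be inferred from P and (P implies Q)", together with a reflection principle connecting that sentence to the program's own inference behavior; nothing of the kind appears in the program above.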